



Learning Hawkes Processes from a handful of events

Neural Information Processing Systems

Learning the causal-interaction network of multivariate Hawkes processes is a useful task in many applications. Maximum-likelihood estimation is the most common approach to this problem when long observation sequences are available. However, when only short sequences are available, the lack of data amplifies the risk of overfitting and regularization becomes critical. Due to the challenges of hyper-parameter tuning, state-of-the-art methods parameterize regularizers by only a single shared hyper-parameter, limiting the representational power of the model. To address both issues, in this work we develop an efficient algorithm based on variational expectation-maximization. Our approach can optimize over an extended set of hyper-parameters, and it accounts for uncertainty in the model parameters by learning a posterior distribution over them. Experimental results on both synthetic and real datasets show that our approach significantly outperforms state-of-the-art methods on short observation sequences.
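As a rough illustration of the modeling setup (not the paper's algorithm), the following Python sketch evaluates the log-likelihood of a multivariate Hawkes process with exponential kernels; the shared decay rate beta and the parameter names mu and alpha are illustrative assumptions. Maximum-likelihood estimation would maximize this quantity over mu and alpha, which is exactly the estimate that overfits when the observation window T is short and alpha is not regularized carefully.

```python
import numpy as np

def hawkes_loglik(events, mu, alpha, beta, T):
    """Log-likelihood of a multivariate Hawkes process with exponential kernels.

    events : list of (t, i) pairs sorted by time, i being the dimension index
    mu     : (d,) baseline intensities
    alpha  : (d, d) excitation matrix; alpha[i, j] is the influence of j on i
    beta   : shared decay rate of the exponential kernels (illustrative choice)
    T      : end of the observation window [0, T]
    """
    loglik = 0.0
    # Event term: sum over events of log lambda_i(t).
    for k, (t, i) in enumerate(events):
        lam = mu[i]
        for t_prev, j in events[:k]:
            lam += alpha[i, j] * beta * np.exp(-beta * (t - t_prev))
        loglik += np.log(lam)
    # Compensator term: integral of every lambda_i over [0, T], in closed form.
    loglik -= mu.sum() * T
    for t, j in events:
        loglik -= alpha[:, j].sum() * (1.0 - np.exp(-beta * (T - t)))
    return loglik
```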


Defining the Scope of Learning Analytics: An Axiomatic Approach for Analytic Practice and Measurable Learning Phenomena

Takii, Kensuke, Liang, Changhao, Ogata, Hiroaki

arXiv.org Machine Learning

Learning Analytics (LA) has rapidly expanded through practical and technological innovation, yet its foundational identity has remained theoretically under-specified. This paper addresses this gap by proposing the first axiomatic theory that formally defines the essential structure, scope, and limitations of LA. Derived from the psychological definition of learning and the methodological requirements of LA, the framework consists of five axioms specifying discrete observation, experience construction, state transition, and inference. From these axioms, we derive a set of theorems and propositions that clarify the epistemological stance of LA, including the inherent unobservability of learner states, the irreducibility of temporal order, constraints on reachable states, and the impossibility of deterministically predicting future learning. We further define LA structure and LA practice as formal objects, demonstrating the sufficiency and necessity of the axioms and showing that diverse LA approaches -- such as Bayesian Knowledge Tracing and dashboards -- can be uniformly explained within this framework. The theory provides guiding principles for designing analytic methods and interpreting learning data while avoiding naive behaviorism and category errors by establishing an explicit theoretical inference layer between observations and states. This work positions LA as a rigorous science of state transition systems based on observability, establishing the theoretical foundation necessary for the field's maturation as a scholarly discipline.
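The abstract names Bayesian Knowledge Tracing as one LA approach that fits this observation-to-state inference layer. As a hedged illustration only (the parameter names and default values below follow standard BKT conventions and are not taken from the paper), the sketch updates an unobservable mastery probability from observed responses, making the separation between observed behavior and inferred latent state explicit.

```python
def bkt_update(p_mastery, correct, p_transit=0.1, p_slip=0.1, p_guess=0.2):
    """One Bayesian Knowledge Tracing step: observed response -> inferred latent state.

    p_mastery : prior probability that the learner has mastered the skill
    correct   : the observed response; the mastery state itself is never observed
    """
    if correct:
        num = p_mastery * (1.0 - p_slip)
        den = num + (1.0 - p_mastery) * p_guess
    else:
        num = p_mastery * p_slip
        den = num + (1.0 - p_mastery) * (1.0 - p_guess)
    p_post = num / den                          # inference layer: state given observation
    return p_post + (1.0 - p_post) * p_transit  # state transition before the next observation

# Example: a short sequence of discrete observations for one skill.
p = 0.3
for obs in [True, False, True]:
    p = bkt_update(p, obs)
```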





FlowHMM: Flow-based continuous hidden Markov models

Neural Information Processing Systems

Continuous hidden Markov models (HMMs) assume that observations are generated from a mixture of Gaussian densities, limiting their ability to model more complex distributions.
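A minimal sketch of the idea behind flow-based emissions (the actual FlowHMM architecture is not reproduced here): each hidden state's emission density is defined by an invertible map applied to a standard Gaussian and evaluated with the change-of-variables formula, rather than being restricted to a Gaussian-mixture form. The affine map below is only a stand-in for a learned invertible network.

```python
import numpy as np
from scipy.special import logsumexp

def flow_logpdf(x, shift, log_scale):
    """Emission log-density of x = shift + exp(log_scale) * z with z ~ N(0, 1),
    computed via the change-of-variables formula (toy affine 'flow')."""
    z = (x - shift) * np.exp(-log_scale)               # inverse transform
    log_base = -0.5 * (z ** 2 + np.log(2.0 * np.pi))   # standard-normal log-density of z
    return log_base - log_scale                        # log-Jacobian correction

def forward_loglik(x_seq, log_pi, log_A, emissions):
    """Forward algorithm for a 1-D HMM whose per-state emission densities are flows."""
    log_alpha = log_pi + np.array([flow_logpdf(x_seq[0], *p) for p in emissions])
    for x in x_seq[1:]:
        log_emit = np.array([flow_logpdf(x, *p) for p in emissions])
        log_alpha = log_emit + logsumexp(log_alpha[:, None] + log_A, axis=0)
    return logsumexp(log_alpha)
```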



Supplementary Material: A Generalized Bayesian Inference

Neural Information Processing Systems

Parametric Bayesian inference implicitly assumes that the generative model is well-specified; in particular, that the observations are generated from the assumed likelihood model. In general, this assumption may not hold in real-world scenarios, so one may wish to take into account the discrepancy between the true data-generating model (DGM) and the assumed likelihood. To obtain Bayes-type updating rules, one needs to specify the loss function as a sum of a "data term" […]. In particular, if one assumes the real-world likelihood, i.e. the DGM […]. C.1 Outline derivation of the loss in (9): to arrive at the expression of the loss in (9), recall the formula for the beta divergence [11] […]. The result is proved via induction. Finally, performing multinomial resampling leads to a conditionally-i.i.d. sample.
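As a hedged numerical illustration of a Bayes-type update built from a loss rather than the exact log-likelihood (the beta-divergence convention and the Gaussian location model below are assumptions made for the example, not the supplement's exact loss in (9)), the sketch compares a standard grid posterior with a generalized posterior whose data term is the density-power loss, which down-weights observations that the model assigns low density.

```python
import numpy as np
from scipy.stats import norm

def beta_loss(x, mu, sigma=1.0, beta=0.5):
    """Density-power (beta-divergence) loss for a Gaussian location model.
    The normalization follows one common convention and may differ from (9)."""
    f = norm.pdf(x, loc=mu, scale=sigma)
    integral = (2.0 * np.pi * sigma ** 2) ** (-beta / 2.0) / np.sqrt(beta + 1.0)
    return -(1.0 / beta) * f ** beta + integral / (beta + 1.0)

def grid_posterior(data, loss, mu_grid):
    """Generalized-Bayes update on a grid: posterior proportional to prior * exp(-total loss)."""
    log_prior = norm.logpdf(mu_grid, loc=0.0, scale=10.0)
    log_post = log_prior - np.array([sum(loss(x, m) for x in data) for m in mu_grid])
    w = np.exp(log_post - log_post.max())
    return w / w.sum()

data = np.array([0.1, -0.3, 0.2, 8.0])   # one gross outlier
mu_grid = np.linspace(-2.0, 2.0, 401)
post_beta = grid_posterior(data, beta_loss, mu_grid)                           # robust update
post_std = grid_posterior(data, lambda x, m: -norm.logpdf(x, loc=m), mu_grid)  # standard Bayes
```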